Pedestrian Trajectory Prediction
ViTE: Virtual Graph Trajectory Expert Router for Pedestrian Trajectory Prediction
Li, Ruochen, Zhu, Zhanxing, Qiao, Tanqiu, Shum, Hubert P. H.
Pedestrian trajectory prediction is critical for ensuring safety in autonomous driving, surveillance systems, and urban planning applications. While early approaches primarily focus on one-hop pairwise relationships, recent studies attempt to capture high-order interactions by stacking multiple Graph Neural Network (GNN) layers. However, these approaches face a fundamental trade-off: insufficient layers may lead to under-reaching problems that limit the model's receptive field, while excessive depth can result in prohibitive computational costs. We argue that an effective model should be capable of adaptively modeling both explicit one-hop interactions and implicit high-order dependencies, rather than relying solely on architectural depth. To this end, we propose ViTE (Virtual graph Trajectory Expert router), a novel framework for pedestrian trajectory prediction. ViTE consists of two key modules: a Virtual Graph that introduces dynamic virtual nodes to model long-range and high-order interactions without deep GNN stacks, and an Expert Router that adaptively selects interaction experts based on social context using a Mixture-of-Experts design. This combination enables flexible and scalable reasoning across varying interaction patterns. Experiments on three benchmarks (ETH/UCY, NBA, and SDD) demonstrate that our method consistently achieves state-of-the-art performance, validating both its effectiveness and practical efficiency.
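The Expert Router described above can be illustrated with a toy Mixture-of-Experts gating step. This is a minimal sketch, not ViTE's actual implementation: the function names, the linear experts, and the top-k gating scheme are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expert_router(social_context, expert_weights, gate_weights, top_k=2):
    """Toy Mixture-of-Experts step: gate scores computed from a social
    context vector select a sparse set of interaction experts, whose
    outputs are blended by renormalized gate probabilities.

    social_context: (d,) feature vector for one pedestrian
    expert_weights: (n_experts, d, d) one linear expert per pattern
    gate_weights:   (n_experts, d) gating projection
    """
    scores = gate_weights @ social_context          # (n_experts,)
    probs = softmax(scores)
    top = np.argsort(probs)[-top_k:]                # indices of top-k experts
    p = probs[top] / probs[top].sum()               # renormalize over selected
    outs = np.stack([expert_weights[i] @ social_context for i in top])
    return (p[:, None] * outs).sum(axis=0)          # weighted expert blend

rng = np.random.default_rng(0)
d, n_experts = 8, 4
out = expert_router(rng.normal(size=d),
                    rng.normal(size=(n_experts, d, d)),
                    rng.normal(size=(n_experts, d)))
print(out.shape)  # (8,)
```

The sparse top-k selection is what makes such routing scale: only a few experts run per pedestrian, so capacity grows without a matching growth in per-step compute.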
Social LSTM with Dynamic Occupancy Modeling for Realistic Pedestrian Trajectory Prediction
Alia, Ahmed, Chraibi, Mohcine, Seyfried, Armin
In dynamic and crowded environments, realistic pedestrian trajectory prediction remains a challenging task due to the complex nature of human motion and the mutual influences among individuals. Deep learning models have recently achieved promising results by implicitly learning such patterns from 2D trajectory data. However, most approaches treat pedestrians as point entities, ignoring the physical space that each person occupies. To address these limitations, this paper proposes a novel deep learning model that enhances the Social LSTM with a new Dynamic Occupied Space loss function. This loss function guides Social LSTM in learning to avoid realistic collisions without increasing displacement error across different crowd densities, ranging from low to high, in both homogeneous and heterogeneous density settings. Such a function achieves this by combining the average displacement error with a new collision penalty that is sensitive to scene density and individual spatial occupancy. For efficient training and evaluation, five datasets were generated from real pedestrian trajectories recorded during the Festival of Lights in Lyon 2022. Four datasets represent homogeneous crowd conditions -- low, medium, high, and very high density -- while the fifth corresponds to a heterogeneous density distribution. The experimental findings indicate that the proposed model not only lowers collision rates but also enhances displacement prediction accuracy in each dataset. Specifically, the model achieves up to a 31% reduction in the collision rate and reduces the average displacement error and the final displacement error by 5% and 6%, respectively, on average across all datasets compared to the baseline. Moreover, the proposed model consistently outperforms several state-of-the-art deep learning models across most test sets.
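The abstract describes a loss that combines average displacement error with a density- and occupancy-sensitive collision penalty. A hedged sketch of that idea follows; the hinge form, the normalization by pair count, and the weighting factor `lam` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def dynamic_occupied_space_loss(pred, gt, radii, lam=1.0):
    """Sketch of an ADE + collision-penalty objective.

    pred, gt: (N, T, 2) predicted / ground-truth positions for N pedestrians
    radii:    (N,) personal occupancy radius per pedestrian

    The penalty counts, per timestep, pairs whose centers come closer
    than the sum of their occupancy radii, normalized by the number of
    pedestrian pairs so the term stays comparable across densities.
    """
    ade = np.linalg.norm(pred - gt, axis=-1).mean()
    N, T, _ = pred.shape
    min_gap = radii[:, None] + radii[None, :]       # (N, N) allowed separation
    collisions = 0.0
    for t in range(T):
        diff = pred[:, t, None, :] - pred[None, :, t, :]   # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)               # (N, N)
        mask = ~np.eye(N, dtype=bool)
        # hinge penalty: positive only when occupied spaces overlap
        collisions += np.maximum(min_gap - dist, 0.0)[mask].sum()
    n_pairs = N * (N - 1)
    return ade + lam * collisions / (n_pairs * T)
```

Because the penalty is zero whenever centers stay farther apart than the summed radii, it only fires on physically implausible overlaps, leaving displacement accuracy untouched elsewhere.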
VISTA: A Vision and Intent-Aware Social Attention Framework for Multi-Agent Trajectory Prediction
Martins, Stephane Da Silva, Aldea, Emanuel, Hégarat-Mascle, Sylvie Le
Multi-agent trajectory prediction is a key task in computer vision for autonomous systems, particularly in dense and interactive environments. Existing methods often struggle to jointly model goal-driven behavior and complex social dynamics, which leads to unrealistic predictions. In this paper, we introduce VISTA, a recursive goal-conditioned transformer architecture that features (1) a cross-attention fusion mechanism to integrate long-term goals with past trajectories, (2) a social-token attention module enabling fine-grained interaction modeling across agents, and (3) pairwise attention maps to show social influence patterns during inference. Our model enhances the single-agent goal-conditioned approach into a cohesive multi-agent forecasting framework. In addition to the standard evaluation metrics, we also consider trajectory collision rates, which capture the realism of the joint predictions. Evaluated on the high-density MADRAS benchmark and on SDD, VISTA achieves state-of-the-art accuracy with improved interaction modeling. On MADRAS, our approach reduces the average collision rate of strong baselines from 2.14% to 0.03%, and on SDD, it achieves a 0% collision rate while outperforming SOTA models in terms of ADE/FDE and minFDE. These results highlight the model's ability to generate socially compliant, goal-aware, and interpretable trajectory predictions, making it well-suited for deployment in safety-critical autonomous systems.
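The trajectory collision rate used above as a realism metric can be computed straightforwardly: an agent counts as collided if its predicted path ever comes within some distance of another agent's. The threshold value here is an assumption, not the one used in the paper.

```python
import numpy as np

def collision_rate(trajs, threshold=0.1):
    """Fraction of agents whose predicted trajectory comes within
    `threshold` (in scene units) of another agent at any timestep.

    trajs: (N, T, 2) jointly predicted trajectories for N agents.
    """
    N, T, _ = trajs.shape
    collided = np.zeros(N, dtype=bool)
    for t in range(T):
        dist = np.linalg.norm(trajs[:, t, None] - trajs[None, :, t], axis=-1)
        np.fill_diagonal(dist, np.inf)      # ignore self-distance
        collided |= (dist < threshold).any(axis=1)
    return collided.mean()
```

Note the metric is inherently joint: it can only be evaluated on scene-level predictions, which is why it complements per-agent ADE/FDE when judging social realism.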
Mapping the Urban Mobility Intelligence Frontier: A Scientometric Analysis of Data-Driven Pedestrian Trajectory Prediction and Simulation
Understanding and predicting pedestrian dynamics has become essential for shaping safer, more responsive, and human-centered urban environments. This study conducts a comprehensive scientometric analysis of research on data-driven pedestrian trajectory prediction and crowd simulation, mapping its intellectual evolution and interdisciplinary structure. Using bibliometric data from the Web of Science Core Collection, we employ SciExplorer and Bibliometrix to identify major trends, influential contributors, and emerging frontiers. Results reveal a strong convergence between artificial intelligence, urban informatics, and crowd behavior modeling, driven by graph neural networks, transformers, and generative models. Beyond technical advances, the field increasingly informs urban mobility design, public safety planning, and digital twin development for smart cities. However, challenges remain in ensuring interpretability, inclusivity, and cross-domain transferability. By connecting methodological trajectories with urban applications, this work highlights how data-driven approaches can enrich urban governance and pave the way for adaptive, socially responsible mobility intelligence in future cities.
Introduction
Pedestrian trajectory prediction and simulation is an interdisciplinary research field where data-driven models, particularly machine learning and deep learning techniques, are employed to model, predict, and simulate human movement dynamics in diverse environments [4, 8, 17]. Although research on pedestrian dynamics can be traced back to the seminal social force model [7], the advent of large-scale mobility datasets and sensing technologies has substantially transformed the landscape in recent decades.
EgoTraj-Bench: Towards Robust Trajectory Prediction Under Ego-view Noisy Observations
Liu, Jiayi, Zhou, Jiaming, Ye, Ke, Lin, Kun-Yu, Wang, Allan, Liang, Junwei
Reliable trajectory prediction from an ego-centric perspective is crucial for robotic navigation in human-centric environments. However, existing methods typically assume idealized observation histories, failing to account for the perceptual artifacts inherent in first-person vision, such as occlusions, ID switches, and tracking drift. This discrepancy between training assumptions and deployment reality severely limits model robustness. To bridge this gap, we introduce EgoTraj-Bench, the first real-world benchmark that grounds noisy, first-person visual histories in clean, bird's-eye-view future trajectories, enabling robust learning under realistic perceptual constraints. Building on this benchmark, we propose BiFlow, a dual-stream flow matching model that concurrently denoises historical observations and forecasts future motion by leveraging a shared latent representation. To better model agent intent, BiFlow incorporates our EgoAnchor mechanism, which conditions the prediction decoder on distilled historical features via feature modulation. Extensive experiments show that BiFlow achieves state-of-the-art performance, reducing minADE and minFDE by 10-15% on average and demonstrating superior robustness. We anticipate that our benchmark and model will provide a critical foundation for developing trajectory forecasting systems truly resilient to the challenges of real-world, ego-centric perception.
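The minADE/minFDE metrics reported above are standard in multimodal trajectory forecasting: over K sampled futures, take the best sample's mean displacement (minADE) and best final-step displacement (minFDE). A minimal reference implementation:

```python
import numpy as np

def min_ade_fde(samples, gt):
    """minADE / minFDE over K sampled futures.

    samples: (K, T, 2) candidate predicted trajectories
    gt:      (T, 2) ground-truth future trajectory
    Returns (minADE, minFDE): best mean displacement over the horizon,
    and best displacement at the final timestep.
    """
    err = np.linalg.norm(samples - gt[None], axis=-1)   # (K, T) per-step errors
    return err.mean(axis=1).min(), err[:, -1].min()
```

Because both metrics take a minimum over samples, they reward diverse prediction sets: a model only needs one sample close to the ground truth to score well.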
Intention-Aware Diffusion Model for Pedestrian Trajectory Prediction
Liu, Yu, Liu, Zhijie, Ren, Xiao, Li, You-Fu, Kong, He
Predicting pedestrian motion trajectories is critical for the path planning and motion control of autonomous vehicles. Recent diffusion-based models have shown promising results in capturing the inherent stochasticity of pedestrian behavior for trajectory prediction. However, the absence of explicit semantic modelling of pedestrian intent in many diffusion-based methods may result in misinterpreted behaviors and reduced prediction accuracy. To address the above challenges, we propose a diffusion-based pedestrian trajectory prediction framework that incorporates both short-term and long-term motion intentions. Short-term intent is modelled using a residual polar representation, which decouples direction and magnitude to capture fine-grained local motion patterns. Long-term intent is estimated through a learnable, token-based endpoint predictor that generates multiple candidate goals with associated probabilities, enabling multimodal and context-aware intention modelling. Furthermore, we enhance the diffusion process by incorporating adaptive guidance and a residual noise predictor that dynamically refines denoising accuracy. The proposed framework is evaluated on the widely used ETH, UCY, and SDD benchmarks, demonstrating competitive results against state-of-the-art methods.
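The residual polar representation of short-term intent described above decouples direction from magnitude. A hedged sketch of what such a decomposition might look like, with an exact inverse for reconstructing positions (the function names and conventions are illustrative assumptions):

```python
import numpy as np

def to_residual_polar(traj):
    """Decompose successive displacements into (magnitude, heading),
    so step length and direction can be modelled separately.

    traj: (T, 2) positions -> (T-1, 2) array of (r, theta).
    """
    d = np.diff(traj, axis=0)                 # per-step displacement vectors
    r = np.linalg.norm(d, axis=-1)            # step magnitudes
    theta = np.arctan2(d[:, 1], d[:, 0])      # step headings in radians
    return np.stack([r, theta], axis=-1)

def from_residual_polar(start, polar):
    """Inverse: rebuild absolute positions from a start point by
    accumulating the polar steps."""
    d = polar[:, :1] * np.stack([np.cos(polar[:, 1]),
                                 np.sin(polar[:, 1])], axis=-1)
    return np.vstack([start, start + np.cumsum(d, axis=0)])
```

Separating `r` from `theta` lets a model express "same heading, different speed" (or vice versa) with a change to a single coordinate, which is awkward in raw Cartesian offsets.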
SceneAware: Scene-Constrained Pedestrian Trajectory Prediction with LLM-Guided Walkability
Accurate prediction of pedestrian trajectories is essential for applications in robotics and surveillance systems. While existing approaches primarily focus on social interactions between pedestrians, they often overlook the rich environmental context that significantly shapes human movement patterns. In this paper, we propose SceneAware, a novel framework that explicitly incorporates scene understanding to enhance trajectory prediction accuracy. Our method leverages a Vision Transformer (ViT) scene encoder to process environmental context from static scene images, while Multi-modal Large Language Models (MLLMs) generate binary walkability masks that distinguish between accessible and restricted areas during training. We combine a Transformer-based trajectory encoder with the ViT-based scene encoder, capturing both temporal dynamics and spatial constraints. The framework integrates collision penalty mechanisms that discourage predicted trajectories from violating physical boundaries, ensuring physically plausible predictions. SceneAware is implemented in both deterministic and stochastic variants. Comprehensive experiments on the ETH/UCY benchmark datasets show that our approach outperforms state-of-the-art methods, with more than 50% improvement over previous models. Our analysis based on different trajectory categories shows that the model performs consistently well across various types of pedestrian movement. This highlights the importance of using explicit scene information and shows that our scene-aware approach is both effective and reliable in generating accurate and physically plausible predictions. Code is available at: https://github.com/juho127/SceneAware.
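Checking predicted points against a binary walkability mask, as the collision-penalty mechanism above does, can be sketched in a few lines. The coordinate convention, the `scale` parameter, and the simple count-based penalty are assumptions for illustration, not SceneAware's actual formulation.

```python
import numpy as np

def walkability_penalty(pred, mask, scale=1.0):
    """Count predicted points that land on non-walkable cells.

    pred:  (T, 2) predicted (x, y) positions in world units
    mask:  (H, W) binary grid, 1 = walkable, 0 = restricted
    scale: assumed world-to-pixel conversion factor
    """
    px = np.clip((pred * scale).astype(int), 0, None)
    x = np.minimum(px[:, 0], mask.shape[1] - 1)   # column index
    y = np.minimum(px[:, 1], mask.shape[0] - 1)   # row index
    return int((mask[y, x] == 0).sum())
```

In training, a differentiable variant (e.g. sampling a soft mask bilinearly) would be needed; this hard count is the evaluation-time analogue.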
Pedestrian Trajectory Prediction with Missing Data: Datasets, Imputation, and Benchmarking
Pedestrian trajectory prediction is crucial for several applications such as robotics and self-driving vehicles. Significant progress has been made in the past decade thanks to the availability of pedestrian trajectory datasets, which enable trajectory prediction methods to learn from pedestrians' past movements and predict future trajectories. However, these datasets and methods typically assume that the observed trajectory sequence is complete, ignoring real-world issues such as sensor failure, occlusion, and limited fields of view that can result in missing values in observed trajectories. To address this challenge, we present TrajImpute, a pedestrian trajectory prediction dataset that simulates missing coordinates in the observed trajectory, enhancing real-world applicability. TrajImpute maintains a uniform distribution of missing data within the observed trajectories.
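Simulating uniformly distributed missing coordinates in an observed trajectory, as TrajImpute does, can be sketched as follows. The missing ratio, the NaN convention, and the function name are assumptions for illustration, not the dataset's published protocol.

```python
import numpy as np

def simulate_missing(obs, missing_ratio=0.3, seed=0):
    """Mask a uniformly sampled fraction of observed timesteps with NaN,
    mimicking sensor failure or occlusion.

    obs: (T, 2) observed trajectory
    Returns (masked copy, boolean mask of missing timesteps).
    """
    rng = np.random.default_rng(seed)
    T = obs.shape[0]
    n_missing = int(round(missing_ratio * T))
    idx = rng.choice(T, size=n_missing, replace=False)  # uniform over steps
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    out = obs.astype(float).copy()
    out[mask] = np.nan
    return out, mask
```

An imputation method would then fill the NaN entries before (or jointly with) prediction, and benchmarking amounts to measuring downstream ADE/FDE as the missing ratio grows.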